
    A Data-Driven Appearance Model for Human Fatigue

    Humans become visibly tired during physical activity. After a set of squats, jumping jacks, or walking up a flight of stairs, individuals start to pant, sweat, lose their balance, and flush. Simulating these physiological changes due to exertion and exhaustion on an animated character greatly enhances a motion’s realism. These fatigue factors depend on the mechanical, physical, and biochemical states of the human body, and the difficulty of simulating fatigue for character animation is due in part to the body’s complex anatomy. We present a multi-modal capture technique for acquiring synchronized biosignal data and motion capture data to enhance character animation. The fatigue model utilizes an anatomically derived model of the human body that includes a torso, organs, a face, and a rigged body, and is driven by the biosignal output. Our animations show the wide range of exhaustion behaviors synthesized from real biological data. We demonstrate the fatigue model by augmenting standard motion capture with exhaustion effects to produce more realistic appearance changes during three exercise examples, and we compare the fatigue model with both simple procedural methods and a dense-marker-set capture of exercise motions.
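
    To make the pipeline concrete, the sketch below shows one way synchronized biosignal samples could be mapped to normalized appearance controls (flush, panting, sweat). The signal names, baselines, and linear mappings are illustrative assumptions, not the paper’s actual fatigue model.

```python
# Illustrative sketch only: mapping captured biosignals to appearance
# parameters. Signal names, baselines, and mappings are assumptions,
# not the paper's fatigue model.
from dataclasses import dataclass

@dataclass
class BiosignalSample:
    heart_rate_bpm: float        # e.g. from a heart-rate monitor
    respiration_rate_bpm: float  # breaths per minute
    skin_conductance: float      # normalized 0..1, a proxy for sweating

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def appearance_params(s: BiosignalSample) -> dict:
    """Convert one synchronized biosignal sample into normalized
    appearance controls for the animated character."""
    return {
        # flushing rises with heart rate above a resting baseline (~60 bpm)
        "flush": clamp01((s.heart_rate_bpm - 60.0) / 120.0),
        # panting amplitude follows respiration above ~12 breaths per minute
        "pant":  clamp01((s.respiration_rate_bpm - 12.0) / 30.0),
        # sweat shine driven directly by skin conductance
        "sweat": clamp01(s.skin_conductance),
    }

print(appearance_params(BiosignalSample(150.0, 35.0, 0.7)))
```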

    Intelligent Camera Control Using Behavior Trees

    Automatic camera systems produce very basic animations for virtual worlds. Users often view environments through two types of cameras: one they control manually, or a very basic automatic camera that follows their character while minimizing occlusions. Real cinematography features far more variety, producing richer stories. Cameras shoot establishing shots, close-ups, tracking shots, and bird’s-eye views to enrich a narrative, and techniques such as zoom, focus, and depth of field contribute to framing a particular shot. We present an intelligent camera system that automatically positions, pans, tilts, zooms, and tracks events occurring in real time while obeying traditional standards of cinematography. We design behavior trees that describe how a single intelligent camera should behave, driven by low-level narrative elements assigned by “smart events”. Camera actions are formed by hierarchically arranging behavior sub-trees that encapsulate nodes controlling specific camera semantics. This approach is more modular and more readily reusable for quickly creating complex camera styles and transitions than approaches that focus only on visibility. Additionally, our user interface allows a director to provide further camera instructions, such as prioritizing one event over another, drawing a path for the camera to follow, and adjusting camera settings on the fly. We demonstrate our method by placing multiple intelligent cameras in a complicated world with several events and storylines, and illustrate how to produce a well-shot “documentary” of the events, constructed in real time.
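
    The sketch below illustrates the behavior-tree idea with a minimal sequence node and two leaf nodes driving a stub camera. The node types and the camera interface are assumptions made for illustration, not the authors’ actual node set.

```python
# Minimal behavior-tree sketch for camera control. The node types and the
# Camera interface are illustrative assumptions, not the authors' design.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Composite node: ticks children in order, fails if any child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, camera, event):
        for child in self.children:
            if child.tick(camera, event) == FAILURE:
                return FAILURE
        return SUCCESS

class TrackEvent:
    """Leaf node: pan/tilt the camera toward the event's position."""
    def tick(self, camera, event):
        camera.look_at(event["position"])
        return SUCCESS

class ZoomToFraming:
    """Leaf node: zoom until the subject fills the requested screen fraction."""
    def __init__(self, fraction):
        self.fraction = fraction
    def tick(self, camera, event):
        camera.zoom_to(event["position"], self.fraction)
        return SUCCESS

class Camera:  # stand-in camera backend for the example
    def look_at(self, pos): print("look at", pos)
    def zoom_to(self, pos, frac): print("zoom toward", pos, "framing", frac)

# A "close-up" style is just a reusable sub-tree of camera semantics.
close_up = Sequence(TrackEvent(), ZoomToFraming(0.6))
close_up.tick(Camera(), {"position": (3.0, 1.5, 0.0)})
```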

    Efficient Motion Retrieval in Large Motion Databases

    There has been a recent paradigm shift in the computer animation industry, with increasing use of pre-recorded motion for animating virtual characters. A fundamental requirement for using motion capture data is an efficient method for indexing and retrieving motions. In this paper, we propose a flexible, efficient method for searching arbitrarily complex motions in large motion databases. Motions are encoded using keys that represent a wide array of structural, geometric, and dynamic features of human motion. Keys provide a representative search space for indexing motions, and users can specify sequences of key values, as well as multiple combinations of key sequences, to search for complex motions. We use a trie-based data structure to provide an efficient mapping from key sequences to motions. The search times (even on a single CPU) are very fast, opening the possibility of using large motion data sets in real-time applications.
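
    A minimal sketch of the trie idea is shown below: discretized key-value sequences map to the motions that contain them. The key names and the suffix-indexing scheme are illustrative assumptions rather than the paper’s exact encoding.

```python
# Illustrative sketch: a trie mapping key-value sequences to motion ids.
# Key names and the suffix-indexing scheme are assumptions for illustration.
class TrieNode:
    def __init__(self):
        self.children = {}    # next key value -> TrieNode
        self.motions = set()  # motions containing the key sequence ending here

class MotionIndex:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, motion_id, key_sequence):
        """Index every suffix so queries match a subsequence starting anywhere."""
        for start in range(len(key_sequence)):
            node = self.root
            for key in key_sequence[start:]:
                node = node.children.setdefault(key, TrieNode())
                node.motions.add(motion_id)

    def query(self, key_sequence):
        """Return the ids of motions containing the given key sequence."""
        node = self.root
        for key in key_sequence:
            if key not in node.children:
                return set()
            node = node.children[key]
        return node.motions

index = MotionIndex()
index.insert("walk_01", ["left_foot_down", "right_foot_down", "left_foot_down"])
index.insert("jump_02", ["crouch", "airborne", "land"])
print(index.query(["right_foot_down", "left_foot_down"]))  # {'walk_01'}
```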

    Recreating Early Islamic Glass Lamp Lighting

    Early Islamic light sources are not simple, static, uniform points, and the fixtures themselves are often combinations of glass, water, fuel, and flame. Physically based renderers such as Radiance are widely used for modeling ancient architectural scenes; however, they rarely capture the true ambiance of the environment because they miss subtle lighting effects. In particular, these renderers often fail to correctly model the complex caustics produced by glass fixtures, water level, and fuel sources. While the original fixtures of the 8th- through 10th-century Mosque of Córdoba in Spain have not survived, we apply information gathered from earlier and contemporary sites and artifacts, including those from Byzantium, to infer that it was illuminated either by single jar lamps or by lamps supported in polycandela, which cast distinctive downward caustic lighting patterns that helped individuals to navigate and to read. To re-synthesize such lighting, we gathered experimental archaeological data and investigated and validated how various water levels and glass fixture shapes likely used during early Islamic times changed the overall light patterns and downward caustics. In this paper, we propose Caustic Cones, a novel data-driven method to ‘shape’ the light emanating from the lamps and recreate the downward lighting without resorting to computationally expensive photon-mapping renderers. Additionally, we demonstrate on a rendering of the Mosque of Córdoba how our approach benefits archaeologists and architectural historians by providing a more authentic visual simulation of early Islamic glass lamp lighting.
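
    As a rough illustration of shaping downward emission, the snippet below evaluates a cone-shaped intensity profile about the downward axis. The half-angle and falloff exponent stand in for values that would be fit from measured caustic data; this is only a sketch under those assumptions, not the paper’s Caustic Cones implementation.

```python
# Illustrative sketch: a cone-shaped downward emission profile. The half-angle
# and falloff exponent are placeholder assumptions, not fitted values from the
# paper's measured caustic data.
import math

def cone_intensity(direction, half_angle_deg=25.0, falloff=4.0):
    """Relative intensity emitted along a unit direction vector,
    concentrated in a cone about the downward (-y) axis."""
    down = (0.0, -1.0, 0.0)
    cos_theta = sum(d * a for d, a in zip(direction, down))
    cos_cutoff = math.cos(math.radians(half_angle_deg))
    if cos_theta <= cos_cutoff:
        return 0.0
    # smooth falloff from the cone axis out to the cutoff angle
    t = (cos_theta - cos_cutoff) / (1.0 - cos_cutoff)
    return t ** falloff

print(cone_intensity((0.0, -1.0, 0.0)))  # on-axis: 1.0
print(cone_intensity((math.sin(math.radians(10)), -math.cos(math.radians(10)), 0.0)))
```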

    Human Model Reaching, Grasping, Looking and Sitting Using Smart Objects

    Manually creating convincing animated human motion in a 3D ergonomic test environment is tedious and time-consuming; procedural motion generators, however, help animators efficiently produce complex and realistic motions. Using the concept of a Human Modeling Software Testbed (HMST), we created novel procedural methods for animating reaching, grasping, looking, and sitting using the environmental context of ‘smart’ objects that parametrically guide the human model’s ergonomic motions. This approach enables complicated procedures such as collision-free leg reach and contextual sitting-motion generation. By procedurally adding small secondary details to the animation, such as head/eye vision constraints and prehensile grasps, the animated motions look more natural with minimal animator input. A ‘smart’ object in the scene graph provides specific parameters to produce proper motions and final positions. These parameters are applied to the desired figure procedurally to create any secondary motions, and they generalize to any environment. Our system allows users to proceed with any required ergonomic analyses with confidence in the visual validity of the automated motions.
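
    The sketch below shows one way a ‘smart’ object might expose parameters that a procedural generator consumes for look, reach, and grasp. The attribute names and the figure interface are illustrative assumptions, not the HMST system’s actual API.

```python
# Illustrative sketch: a 'smart' object exposing motion parameters that a
# procedural generator consumes. Attribute names and the Figure interface
# are assumptions, not the HMST system's actual API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SmartObject:
    name: str
    grasp_points: list = field(default_factory=list)  # candidate hand contacts
    approach_direction: tuple = (0.0, 0.0, 1.0)       # preferred reach direction
    gaze_target: tuple = (0.0, 0.0, 0.0)              # where the head/eyes look
    sit_pose: Optional[dict] = None                   # pelvis/foot targets if sittable

class Figure:  # stand-in human model for the example
    def look_at(self, p): print("look at", p)
    def reach(self, p, approach): print("reach", p, "approaching from", approach)
    def grasp(self, pts): print("grasp at", pts)

def plan_reach(figure, obj: SmartObject):
    """Drive the figure from the object's parameters: look, reach, then grasp."""
    figure.look_at(obj.gaze_target)
    figure.reach(obj.grasp_points[0], approach=obj.approach_direction)
    figure.grasp(obj.grasp_points)

mug = SmartObject("mug", grasp_points=[(0.4, 0.9, 0.2)], gaze_target=(0.4, 0.9, 0.2))
plan_reach(Figure(), mug)
```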